Analysis of a Nonreversible Markov Chain Sampler
Authors
Abstract
We analyze the convergence to stationarity of a simple nonreversible Markov chain that serves as a model for several nonreversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that nonreversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalization.
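As a concrete illustration of the diffusive-versus-persistent contrast described in the abstract, the following minimal Python sketch compares a simple reversible random walk on the cycle Z_n with a lifted, nonreversible walk that keeps a persistent direction and reverses it only with small probability. This is an illustrative construction in the spirit of the abstract rather than the paper's exact chain; the cycle topology, the flip probability 1/n, and all numerical parameters are assumptions made for the example.

    # Illustrative sketch only: compares a reversible random walk on the cycle Z_n
    # with a lifted, nonreversible walk that keeps a persistent direction.
    # The flip probability 1/n and all parameters are assumptions for this example,
    # not values taken from the paper.
    import numpy as np

    def reversible_step(x, n, rng):
        # Symmetric random walk: move +1 or -1 with equal probability (diffusive).
        return (x + rng.choice([-1, 1])) % n

    def lifted_step(x, v, n, rng):
        # Nonreversible lifted walk: keep moving in direction v, and with a small
        # probability reverse it. Long persistent runs suppress diffusive behavior.
        if rng.random() < 1.0 / n:
            v = -v
        return (x + v) % n, v

    def empirical_tv_distance(samples, n):
        # Total-variation distance between the empirical distribution of the
        # chains' current states and the uniform target on {0, ..., n-1}.
        counts = np.bincount(samples, minlength=n) / len(samples)
        return 0.5 * np.abs(counts - 1.0 / n).sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, n_chains, n_steps = 50, 2000, 200

        xs_rev = np.zeros(n_chains, dtype=int)
        xs_lift = np.zeros(n_chains, dtype=int)
        vs = np.ones(n_chains, dtype=int)
        for _ in range(n_steps):
            xs_rev = np.array([reversible_step(x, n, rng) for x in xs_rev])
            stepped = [lifted_step(x, v, n, rng) for x, v in zip(xs_lift, vs)]
            xs_lift = np.array([s[0] for s in stepped])
            vs = np.array([s[1] for s in stepped])

        print("TV to uniform, reversible walk: %.3f" % empirical_tv_distance(xs_rev, n))
        print("TV to uniform, lifted walk:     %.3f" % empirical_tv_distance(xs_lift, n))

Running many independent copies of each chain from a common starting point and comparing their empirical distributions to the uniform target gives a quick numerical sense of how the persistent, nonreversible walk spreads over the state space faster than the diffusive reversible one.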
Similar resources
Filtered and Setwise Gibbs Samplers for Teletraffic Analysis
The Gibbs sampler is a very simple yet efficient method for the performance evaluation of product form loss networks. This paper introduces the setwise Gibbs sampler as a flexible technique for analysing closed BCMP networks, which model telecommunication networks using window flow control. The efficiency of another variant, the filtered Gibbs sampler (FGS), is also investigated. It is shown th...
Stochastic Bouncy Particle Sampler
We introduce a stochastic version of the nonreversible, rejection-free Bouncy Particle Sampler (BPS), a Markov process whose sample trajectories are piecewise linear, to efficiently sample Bayesian posteriors in big datasets. We prove that in the BPS no bias is introduced by noisy evaluations of the log-likelihood gradient. On the other hand, we argue that efficiency considerations favor a smal...
Location-Aided Fast Distributed Consensus
Existing works on distributed averaging explore linear iterations based on reversible Markov chains. The convergence of such algorithms is bounded to be slow due to the diffusive behavior of the reversible chains. It has been observed that by overcoming the diffusive behavior, certain nonreversible chains lifted from reversible ones mix substantially faster than the original chains [1], [2]. In...
On the use of auxiliary variables in Markov chain Monte Carlo sampling
We study the slice sampler, a method of constructing a reversible Markov chain with a specified invariant distribution. Given an independence Metropolis-Hastings algorithm it is always possible to construct a slice sampler that dominates it in the Peskun sense. This means that the resulting Markov chain produces estimates with a smaller asymptotic variance. Furthermore the slice sampler has a sm...
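As a rough illustration of the auxiliary-variable construction mentioned in the summary above, the following sketch implements a basic one-dimensional slice sampler using the common stepping-out and shrinkage procedure. The standard-normal target, the interval width, and the run length are assumptions made for this example and are not taken from the paper being summarized.

    # Illustrative sketch of a basic 1-D slice sampler (stepping-out and shrinkage).
    # The standard-normal target and all tuning values are assumptions for this example.
    import math
    import random

    def log_density(x):
        # Unnormalized log-density of a standard normal target.
        return -0.5 * x * x

    def slice_sample_step(x, w=2.0, rng=random):
        # Draw an auxiliary "height" uniformly under the density at x, then sample
        # the next point uniformly from the horizontal slice
        # {x' : log_density(x') >= log_u}.
        log_u = log_density(x) + math.log(1.0 - rng.random())
        # Step out an interval of width w around x until it brackets the slice.
        left = x - w * rng.random()
        right = left + w
        while log_density(left) > log_u:
            left -= w
        while log_density(right) > log_u:
            right += w
        # Shrink the interval until a proposal lands inside the slice.
        while True:
            x_new = left + (right - left) * rng.random()
            if log_density(x_new) >= log_u:
                return x_new
            if x_new < x:
                left = x_new
            else:
                right = x_new

    if __name__ == "__main__":
        x, samples = 0.0, []
        for _ in range(5000):
            x = slice_sample_step(x)
            samples.append(x)
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        print("sample mean %.3f, sample variance %.3f (target: 0, 1)" % (mean, var))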
Markov Interacting Importance Samplers
We introduce a new Markov chain Monte Carlo (MCMC) sampler called the Markov Interacting Importance Sampler (MIIS). The MIIS sampler uses conditional importance sampling (IS) approximations to jointly sample the current state of the Markov chain and estimate conditional expectations, possibly by incorporating a full range of variance reduction techniques. We compute Rao-Blackwellized estimates ...
Journal title:
Volume / Issue:
Pages: -
Publication date: 2000